Lean Proof Explanation

exists_reduction: Reducing a list of transpositions

Overview

This lemma proves that when you append a transposition (a b) to a list M of transpositions, one of two outcomes occurs: either the product can be reduced (cancellation happens, shortening the list by 1), or it can be factored as (a z) · M' where all transpositions in M' fix the element a. This is a key step in algorithms that normalize permutations by repeatedly applying move_swap_left. The proof uses structural induction on the list M (via reverseRecOn), processing transpositions from right to left. At each step, we apply move_swap_left to the rightmost transposition s and the new (a b), then recursively handle the remaining list. The two base cases (cancellation vs. factorization) propagate through the induction, maintaining the invariant that we either reduce the length or factor out a transposition fixing a.

📝 Full Proof Code

lemma exists_reduction {α : Type*} [DecidableEq α]
    (M : List (Perm α)) (a b : α) (hab : a ≠ b) (h_swaps : ∀ s ∈ M, s.IsSwap) :
    (∃ M' : List (Perm α), M'.length + 2 = M.length + 1 ∧ (∀ s ∈ M', s.IsSwap) ∧
      M'.prod = (M ++ [swap a b]).prod)
    ∨ (∃ z, z ≠ a ∧ ∃ M' : List (Perm α), (∀ s ∈ M', s.IsSwap ∧ s a = a) ∧
      (M ++ [swap a b]).prod = swap a z * M'.prod) := by

Strategy: State the lemma — appending (a b) to a list M of transpositions either reduces the length or factors out (a z).

Explanation: Given a list M of transpositions and a new transposition (a b) with a ≠ b, we prove a disjunction: either (1) there exists a shorter list M' with M'.length + 2 = M.length + 1 (i.e., we reduced by 1) such that M'.prod equals (M ++ [swap a b]).prod, or (2) the product factors as (a z) · M'.prod where all transpositions in M' fix a. This captures the two outcomes of move_swap_left when applied iteratively: cancellation (reduction) or factorization (moving a to the left).

  induction M using List.reverseRecOn generalizing a b

Strategy: Induct on the list M using reverseRecOn to process from right to left.

Explanation: We use structural induction on M via List.reverseRecOn, which provides two cases: (1) nil (empty list), and (2) append_singleton (list L with a single element s appended). This allows us to process the list from right to left, matching the natural order of multiplication: (M ++ [swap a b]).prod = M.prod * swap a b. We generalize over a and b so the induction hypothesis can be applied with different elements (specifically, a and z when we apply move_swap_left).

  case nil => right; refine ⟨b, hab.symm, [], by simp, by simp⟩

Strategy: Base case — if M is empty, factor as (a b) · [] (the identity).

Explanation: When M = [], we have ([] ++ [swap a b]).prod = swap a b. We choose the right disjunct (factorization) with z := b (satisfying b ≠ a by hab.symm) and M' := []. The empty list trivially satisfies "all elements fix a" (vacuously true), and swap a b = swap a b * 1 by simp. This establishes the base case for the induction.

  case append_singleton L s ih =>

Strategy: Inductive case — assume M = L ++ [s] and we have an induction hypothesis ih for L.

Explanation: In the inductive step, we assume M can be written as L ++ [s] for some list L and transposition s. The induction hypothesis ih states that the lemma holds for L (with arbitrary a and b). We'll apply move_swap_left to s and swap a b, then use ih to handle the remaining list L.

    have hs : s.IsSwap := by simp_all
    have hL_swaps : ∀ x ∈ L, x.IsSwap := by simp_all

Strategy: Extract hypotheses — s is a swap, and all elements of L are swaps.

Explanation: From the hypothesis h_swaps : ∀ x ∈ M, x.IsSwap and the fact that M = L ++ [s], we deduce that s.IsSwap (since s ∈ M) and ∀ x ∈ L, x.IsSwap (since L ⊆ M). The tactic simp_all simplifies and uses the context to close both goals. These facts will be needed when applying move_swap_left and the induction hypothesis.

    rcases move_swap_left hab s hs with h_cancel | ⟨z, hza, τ, hτ_swap, hτ_fix, h_comm⟩

Strategy: Apply move_swap_left to s and swap a b, case-splitting on cancellation vs. factorization.

Explanation: We invoke move_swap_left with a ≠ b (hab), the transposition s, and the proof hs that s is a swap. This gives us two cases: (1) h_cancel : s * swap a b = 1 (cancellation), or (2) witnesses z ≠ a, τ a swap fixing a, and h_comm : s * swap a b = swap a z * τ (factorization). We use rcases to destructure the result and proceed with case analysis.

    · left
      refine ⟨L, by simp, hL_swaps, ?_⟩
      simp [List.prod_append, h_cancel]

Strategy: Cancellation case — s and (a b) cancel, so the reduced list is just L.

Explanation: When s * swap a b = 1, we have (L ++ [s] ++ [swap a b]).prod = L.prod * s * swap a b = L.prod * 1 = L.prod. We choose the left disjunct (reduction) with M' := L. The length condition L.length + 2 = (L ++ [s]).length + 1 simplifies to L.length + 2 = L.length + 1 + 1, which is true. All elements of L are swaps (hL_swaps), and the product equality follows from h_cancel. This closes the cancellation branch.

    · specialize ih a z hza.symm hL_swaps

Strategy: Factorization case — apply the induction hypothesis to L with the new pair (a, z).

Explanation: In the factorization case, we have s * swap a b = swap a z * τ where z ≠ a and τ fixes a. We now need to handle L ++ [swap a z]. We specialize the induction hypothesis ih with the elements a and z (note: z ≠ a becomes a ≠ z via hza.symm), and the hypothesis hL_swaps that all elements of L are swaps. This gives us the reduction/factorization for (L ++ [swap a z]).prod.

      rcases ih with ⟨L', hlen, hops', hprod⟩ | ⟨w, hwa, L', hops', hprod⟩

Strategy: Case-split on the result of the induction hypothesis — reduction or factorization for L.

Explanation: The induction hypothesis ih gives us two cases for (L ++ [swap a z]).prod: (1) reduction — there exists L' with L'.length + 2 = L.length + 1, all elements are swaps, and L'.prod = (L ++ [swap a z]).prod, or (2) factorization — (L ++ [swap a z]).prod = swap a w * L'.prod where w ≠ a and all elements of L' fix a. We use rcases to handle both cases separately.

      · left
        refine ⟨L' ++ [τ], ?_, ?_, ?_⟩

Strategy: IH reduction case — if L reduced, append τ to get the final reduced list.

Explanation: If the induction hypothesis gave us a reduction for L (case 1), we have L'.prod = (L ++ [swap a z]).prod where L'.length + 2 = L.length + 1. We need to show that (L ++ [s] ++ [swap a b]).prod can be reduced. Using h_comm : s * swap a b = swap a z * τ, we can rewrite the product to involve (L ++ [swap a z]) * τ. The reduced list is L' ++ [τ], and we must verify three properties: length, all swaps, and product equality.

        · simp [List.length_append]; omega

Strategy: Verify the length condition using arithmetic.

Explanation: We must show (L' ++ [τ]).length + 2 = (L ++ [s]).length + 1. Simplifying with List.length_append, this becomes (L'.length + 1) + 2 = (L.length + 1) + 1, i.e., L'.length + 3 = L.length + 2. From the IH we have L'.length + 2 = L.length + 1, and adding 1 to both sides gives exactly the goal. The omega tactic discharges this linear arithmetic automatically.

        · intro x hx
          rw [List.mem_append, List.mem_singleton] at hx
          rcases hx with hx | rfl
          · exact hops' x hx
          · exact hτ_swap

Strategy: Prove all elements of L' ++ [τ] are swaps.

Explanation: We must show ∀ x ∈ L' ++ [τ], x.IsSwap. For any x in the list, by List.mem_append and List.mem_singleton, either x ∈ L' or x = τ. If x ∈ L', then x.IsSwap by hops' (from the IH). If x = τ, then x.IsSwap by hτ_swap (from move_swap_left). This case analysis closes the goal.

        · simp [List.prod_append, h_comm, mul_assoc, hprod]

Strategy: Prove the product equality using h_comm and the IH product equation.

Explanation: We must show (L' ++ [τ]).prod = (L ++ [s] ++ [swap a b]).prod. By List.prod_append, the RHS is L.prod * s * swap a b. Using h_comm : s * swap a b = swap a z * τ, this becomes L.prod * (swap a z * τ) = (L.prod * swap a z) * τ by mul_assoc. From the IH, hprod : L'.prod = (L ++ [swap a z]).prod = L.prod * swap a z. Substituting gives L'.prod * τ = (L' ++ [τ]).prod, which is the goal. The simp tactic with these lemmas closes this.
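Stripped of the list bookkeeping, this equality is a fact about any group. The sketch below isolates it, with hypothetical names p, s, b, z, τ, l' standing in for L.prod, s, swap a b, swap a z, τ, and L'.prod respectively:

```lean
import Mathlib

-- Abstract form of this step's product equality, valid in any group:
-- from s * b = z * τ (move_swap_left) and l' = p * z (the IH),
-- conclude l' * τ = p * (s * b).
example {G : Type*} [Group G] (p s b z τ l' : G)
    (h_comm : s * b = z * τ) (hprod : l' = p * z) :
    l' * τ = p * (s * b) := by
  rw [h_comm, hprod, mul_assoc]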

      · right
        refine ⟨w, hwa, L' ++ [τ], ?_, ?_⟩

Strategy: IH factorization case — if L factored as (a w) · M', append τ and factor the entire product.

Explanation: If the induction hypothesis gave us a factorization for L (case 2), we have (L ++ [swap a z]).prod = swap a w * L'.prod where w ≠ a and all elements of L' fix a. We choose the right disjunct (factorization) with witness w and the extended list L' ++ [τ]. We must verify two properties: (1) all elements fix a, and (2) the product equality.

        · intro x hx
          rw [List.mem_append, List.mem_singleton] at hx
          rcases hx with hx | rfl
          · exact hops' x hx
          · exact ⟨hτ_swap, hτ_fix⟩

Strategy: Prove all elements of L' ++ [τ] are swaps that fix a.

Explanation: We must show ∀ x ∈ L' ++ [τ], x.IsSwap ∧ x a = a. For any x in the list, either x ∈ L' or x = τ. If x ∈ L', then x.IsSwap ∧ x a = a by hops' (from the IH factorization case). If x = τ, then x.IsSwap by hτ_swap and x a = a by hτ_fix (both from move_swap_left). This case analysis closes the goal.

        · simp only [List.prod_append, List.prod_singleton] at hprod
          rw [mul_assoc, h_comm, ← mul_assoc, hprod, mul_assoc]

Strategy: Prove the product factorization using h_comm and the IH factorization equation.

Explanation: We must show (L ++ [s] ++ [swap a b]).prod = swap a w * (L' ++ [τ]).prod. The LHS is L.prod * s * swap a b. Using h_comm : s * swap a b = swap a z * τ, this becomes L.prod * (swap a z * τ). By mul_assoc, this is (L.prod * swap a z) * τ. From the IH, hprod : (L ++ [swap a z]).prod = swap a w * L'.prod, which simplifies to L.prod * swap a z = swap a w * L'.prod. Substituting gives (swap a w * L'.prod) * τ = swap a w * (L'.prod * τ) by associativity = swap a w * (L' ++ [τ]).prod. The sequence of rewrites closes this goal, completing the proof.
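The five rewrites perform a purely group-theoretic shuffle. Here is the same chain isolated in an arbitrary group, with hypothetical names p, s, b, z, τ, w, l' standing in for L.prod, s, swap a b, swap a z, τ, swap a w, and L'.prod:

```lean
import Mathlib

-- The exact rewrite sequence of this step, in any group:
-- from s * b = z * τ (move_swap_left) and p * z = w * l' (the IH),
-- conclude p * s * b = w * (l' * τ).
example {G : Type*} [Group G] (p s b z τ w l' : G)
    (h_comm : s * b = z * τ) (hprod : p * z = w * l') :
    p * s * b = w * (l' * τ) := by
  rw [mul_assoc, h_comm, ← mul_assoc, hprod, mul_assoc]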


Proof Structure Summary

The proof uses structural induction on the list M via List.reverseRecOn, processing transpositions from right to left:

- Base case (M = []): choose the factorization branch trivially, with z := b and M' := [].
- Inductive case (M = L ++ [s]): apply move_swap_left to s and swap a b; on cancellation, reduce to L, and on factorization (s * swap a b = swap a z * τ), apply the induction hypothesis to L with the pair (a, z) and recombine the result with τ.

Key insight: The lemma maintains the invariant that appending a transposition to a list either reduces the length (via cancellation) or factors out a transposition fixing a specific element. This is the foundation for algorithms that normalize permutations to canonical forms by iteratively "moving elements to the left" until all cancellations are performed and the remaining product is in a standard factorization.

Key Lemmas and Techniques Used

1. move_swap_left

The core algebraic lemma — for any transposition σ and (a b), either σ · (a b) = 1 (cancellation), or σ · (a b) = (a z) · τ where τ fixes a. This lemma is applied at each inductive step to handle the rightmost transposition s composed with swap a b. The two cases (cancellation and factorization) drive the two branches of the exists_reduction proof.
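A statement of roughly this shape is what the proof destructures with rcases. This is a sketch of the assumed form; the exact name and phrasing in the actual development may differ:

```lean
import Mathlib

open Equiv

-- Sketch of the assumed statement of move_swap_left (not the verified
-- original): a transposition σ times swap a b either cancels, or
-- refactors as swap a z * τ with τ a transposition fixing a.
lemma move_swap_left {α : Type*} [DecidableEq α] {a b : α} (hab : a ≠ b)
    (σ : Perm α) (hσ : σ.IsSwap) :
    σ * swap a b = 1 ∨
      ∃ z, z ≠ a ∧ ∃ τ : Perm α, τ.IsSwap ∧ τ a = a ∧
        σ * swap a b = swap a z * τ := by
  sorry
```

The two disjuncts match the rcases pattern h_cancel | ⟨z, hza, τ, hτ_swap, hτ_fix, h_comm⟩ used in the proof above.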

2. List.reverseRecOn

A structural induction principle for lists that processes from right to left. For a list M, we get two cases: (1) M = [] (base), and (2) M = L ++ [s] (inductive). This matches the natural multiplication order (M ++ [x]).prod = M.prod * x, making the proof flow naturally. We generalize over a and b to allow the IH to be applied with different elements.
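A minimal skeleton showing the two case names this induction principle generates (the goal here is a toy; only the case structure matters):

```lean
import Mathlib

-- List.reverseRecOn splits a goal about M into M = [] and M = L ++ [s].
example (M : List ℕ) : M.reverse.reverse = M := by
  induction M using List.reverseRecOn with
  | nil => simp                      -- M = []
  | append_singleton L s ih => simp  -- M = L ++ [s]; ih covers L
```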

3. List.prod_append and mul_assoc

List.prod_append states (L ++ M).prod = L.prod * M.prod, allowing us to split products across list concatenation. Combined with mul_assoc (associativity of multiplication), we can rearrange products to match the factorizations from move_swap_left and the induction hypothesis. These are the workhorses for manipulating product equalities.
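A small self-contained instance of the manipulation used throughout the proof, splitting a product over ++ and reassociating to expose the rightmost pair:

```lean
import Mathlib

open Equiv

-- Splitting (L ++ [s] ++ [t]).prod and regrouping s * t on the right.
example {α : Type*} (L : List (Perm α)) (s t : Perm α) :
    (L ++ [s] ++ [t]).prod = L.prod * (s * t) := by
  simp [List.prod_append, mul_assoc]
```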

4. omega (arithmetic solver)

An automated tactic for linear integer arithmetic. Used to verify length conditions like (L'.length + 1) + 2 = (L.length + 1) + 1 given the IH constraint L'.length + 2 = L.length + 1. This handles the bookkeeping of list lengths during reduction, ensuring the "length decreased by 1" property is maintained correctly through the induction.
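The length bookkeeping from the reduction branch, isolated as a standalone goal:

```lean
import Mathlib

-- The arithmetic omega discharges: from l' + 2 = l + 1,
-- conclude (l' + 1) + 2 = (l + 1) + 1.
example (l l' : ℕ) (h : l' + 2 = l + 1) : (l' + 1) + 2 = (l + 1) + 1 := by
  omega
```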

Mathematical Significance

This lemma is a crucial component of algorithms for normalizing permutations represented as products of transpositions. Given a permutation π = τ₁ · τ₂ · ... · τₙ (a product of transpositions), we can repeatedly apply exists_reduction to simplify the representation: either cancel adjacent transpositions (reducing length), or factor out transpositions that fix a chosen element a. By iterating this process, we can compute canonical forms such as cycle decompositions, determine the sign of a permutation (via the parity of reductions), and verify permutation equivalence. The proof demonstrates a powerful pattern: structural induction + local rewriting (move_swap_left) = global normalization. This technique generalizes to many problems in computational group theory and algebraic combinatorics.

Algorithm Perspective

From an algorithmic viewpoint, exists_reduction provides a termination certificate for permutation reduction algorithms. Each application either shortens the list (reduction case) or maintains length while factoring out a fixing transposition (factorization case). By tracking the number of reductions, we can bound the number of steps needed to reach a normal form. This is the foundation for efficient algorithms that compute with permutations in polynomial time, avoiding exponential blowup from naive representations. The constructive nature of the proof (we explicitly build M' in each case) means the lemma can be computationally realized, not just used for reasoning about existence.